4 research outputs found

    Informants in Organizational Marketing Research

    Organizational research frequently involves seeking judgmental data from multiple informants within organizations. Researchers often must determine how many informants to survey, who those informants should be, and (if more than one) how best to aggregate their responses when those responses disagree. Using both recall and forecasting data from a laboratory study involving the MARKSTRAT simulation, we show that when multiple respondents disagree, responses aggregated using confidence-based or competence-based weights outperform those aggregated with data-based weights, which in turn provide significant gains in estimation accuracy over simply averaging respondent reports. We then illustrate how these results can be used to determine the best number of respondents for a market research task, as well as to provide an effective screening mechanism when seeking a single, best informant.
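
    A minimal sketch of the aggregation idea described above (illustrative only; the concrete weights, scales, and numbers here are assumed, not taken from the paper): each informant reports an estimate plus a self-rated confidence, and confidence-weighted aggregation uses those ratings as weights, whereas the simple-average baseline ignores them.

    def simple_average(estimates):
        # Unweighted mean of informant estimates (the baseline in the abstract).
        return sum(estimates) / len(estimates)

    def confidence_weighted(estimates, confidences):
        # Weighted mean using each informant's self-rated confidence as the weight.
        # Assumes confidences are non-negative and not all zero.
        total = sum(confidences)
        return sum(e * c for e, c in zip(estimates, confidences)) / total

    # Hypothetical example: three informants forecast next-quarter unit sales.
    estimates = [1200.0, 900.0, 1500.0]
    confidences = [0.9, 0.3, 0.6]  # assumed 0-1 self-ratings

    print(simple_average(estimates))                    # 1200.0
    print(confidence_weighted(estimates, confidences))  # 1250.0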

    Institutional Forecasting: The Performance of Thin Virtual Stock Markets

    We study the performance of Virtual Stock Markets (VSMs) in an institutional forecasting environment. We compare VSMs to the Combined Judgmental Forecast (CJF) and the Key Informant (KI) approach. We find that VSMs can be effectively applied in an environment with a small number of knowledgeable informants, i.e., in thin markets. Our results show that the three approaches do not differ in forecasting accuracy when knowledge heterogeneity is low. However, when knowledge heterogeneity is high, the VSM approach outperforms the CJF approach, which in turn outperforms the KI approach. Hence, our results provide useful insight into when each of the three approaches might be most effectively applied.

    How and Why Decision Models Influence Marketing Resource Allocations

    We study how and why model-based Decision Support Systems (DSSs) influence managerial decision making in the context of marketing budgeting and resource allocation. We consider several questions: (1) What does it mean for a DSS to be "good"? (2) What is the relationship between an anchor or reference condition, a DSS-supported recommendation, and decision quality? (3) How does a DSS influence the decision process, and how does the process influence outcomes? (4) Is the effect of the DSS on the decision process and outcome robust, or context specific? We test hypotheses about the effects of DSSs in a controlled experiment with two award-winning DSSs and find that (1) DSSs improve users' objective decision outcomes (an index of likely realized revenue or profit); (2) DSS users often do not report enhanced subjective perceptions of outcomes; and (3) DSSs that provide feedback in the form of specific recommendations and their associated projected benefits have a stronger effect on both the decision-making process and the outcomes. Our results suggest that although managers actually achieve improved outcomes from DSS use, they may not perceive that the DSS has improved those outcomes. Therefore, there may be limited interest in managerial use of DSSs unless they are designed to: (1) encourage discussion (e.g., by providing explanations and support for the recommendations), (2) provide feedback to users on likely marketplace results, and (3) help reduce the perceived complexity of the problem so that managers will consider more alternatives and invest more cognitive effort in searching for improved outcomes.

    How Feedback Can Improve Managerial Evaluations of Model-based Marketing Decision Support Systems

    Marketing managers often provide much poorer evaluations of model-based marketing decision support systems (MDSSs) than are warranted by the objective performance of those systems. We show that one reason for this discrepancy may be that MDSSs are often not designed to help users understand and internalize the underlying factors driving the MDSS results and related recommendations. Thus, there is likely to be a gap between a marketing manager's mental model and the decision model embedded in the MDSS. We suggest that this gap is an important reason for the poor subjective evaluations of MDSSs, even when the MDSSs are of high objective quality, ultimately resulting in unreasonably low levels of MDSS adoption and use. We propose that to have impact, an MDSS should not only be of high objective quality but should also help reduce any gap between the user's mental model and the MDSS model. We evaluate two design characteristics that together lead model users to update their mental models and reduce this gap, resulting in better MDSS evaluations: providing feedback on the upside potential for performance improvement, and providing specific suggestions for corrective actions to better align the user's mental model with the MDSS. We hypothesize that, in tandem, these two types of MDSS feedback induce marketing managers to update their mental models, a process we call deep learning, whereas individually these two types of feedback have much smaller effects on deep learning. We validate our framework in an experimental setting, using a realistic MDSS in the context of a direct marketing decision problem. We then discuss how our findings can lead to design improvements and better returns on investments in MDSSs such as CRM systems, Revenue Management systems, pricing decision support systems, and the like.